Moca has open-sourced Agent Definition Language (ADL), a vendor-neutral specification intended to standardize how AI agents are defined, reviewed, and governed across frameworks and platforms. The project is released under the Apache 2.0 license and is positioned as a missing "definition layer" for AI agents, comparable to the role OpenAPI plays for APIs. ADL provides a declarative format for defining AI agents, including their identity, role, language model setup, tools, permissions, RAG data access, dependencies, and governance metadata like ownership and version history.
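The published specification defines the actual schema; purely to illustrate the kinds of fields such a definition layer covers, here is a minimal Python sketch modeling an agent definition with the attributes listed above. The `AgentDefinition` class and all field names here are hypothetical stand-ins, not ADL's real syntax.

```python
from dataclasses import dataclass, field

# Hypothetical model of the fields an ADL-style agent definition
# covers. This is NOT the actual ADL schema, just an illustration
# of the attributes described in the announcement above.

@dataclass
class AgentDefinition:
    name: str                                              # identity
    role: str                                              # what the agent is for
    model: str                                             # language model setup
    tools: list[str] = field(default_factory=list)         # callable tools
    permissions: list[str] = field(default_factory=list)   # what the agent may do
    rag_sources: list[str] = field(default_factory=list)   # RAG data access
    dependencies: list[str] = field(default_factory=list)  # other agents/services
    owner: str = ""                                        # governance: ownership
    version: str = "0.1.0"                                 # governance: versioning

# Example: a declarative definition that can be reviewed and
# version-controlled like any other configuration artifact.
support_agent = AgentDefinition(
    name="support-triage",
    role="Classify and route inbound support tickets",
    model="example-provider/example-model",
    tools=["ticket_search", "crm_lookup"],
    permissions=["read:tickets", "write:ticket_labels"],
    rag_sources=["kb://support-articles"],
    owner="support-platform-team",
)
```

The point of such a format, as with OpenAPI, is that the definition itself becomes a reviewable artifact: ownership, permissions, and version history live alongside the agent's configuration rather than being scattered across framework code.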
But I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won't do exactly this; we would obviously never run ads in the way Anthropic depicts them. We are not stupid, and we know our users would reject that. I guess it's on brand for Anthropic's doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren't real, but a Super Bowl ad is not where I would have expected it.
Today, I'm talking with Alex Lintner, who is the CEO of technology and software solutions at Experian, the credit reporting company. Experian is one of those multinationals that's so big and convoluted that it has multiple CEOs all over the world, so Alex and I spent quite a lot of time talking through the Decoder questions just so I could understand how Experian is structured, how it functions, and how the kinds of decisions Alex makes actually work in practice.
Political leaders could soon launch swarms of human-imitating AI agents to reshape public opinion in a way that threatens to undermine democracy, a high-profile group of experts in AI and online misinformation has warned. The Nobel Peace Prize-winning free-speech activist Maria Ressa and leading AI and social science researchers from Berkeley, Harvard, Oxford, Cambridge and Yale are among a global consortium flagging the new disruptive threat posed by hard-to-detect, malicious AI swarms infesting social media and messaging channels.
One year ago this week, Silicon Valley and Wall Street were shocked by the release of China's DeepSeek mobile app, which rivaled US-based large language models like ChatGPT by showing comparable performance on key benchmarks at a fraction of the cost while using less-advanced chips. DeepSeek opened a new chapter in the US-China rivalry, with the world recognizing the competitiveness of Chinese AI models, and Beijing pouring more resources into developing its own AI ecosystem.
Salesforce-owned integration platform provider MuleSoft has added a new feature called Agent Scanners to Agent Fabric, a suite of capabilities and tools that the company launched last year to rein in the growing challenge of agent sprawl across enterprises. Agent sprawl, often the result of enterprises and their technology teams adopting multiple agentic products, can fragment agents, leaving their workflows redundant or siloed across teams and platforms.
The country's top internet regulator, the Cyberspace Administration of China (CAC), requires that any company launching an AI tool with "public opinion properties or social mobilization capabilities" first file it in a public database: the algorithm registry. In a submission, developers must show how their products avoid 31 categories of risk, from age and gender discrimination to psychological harm to "violating core socialist values."
Ministers are scrambling to find a way to combat an explosion of digitally created images of semi-nude women and children on the social media platform X. The Taoiseach rejected the suggestion that current legislation is not strong enough to deal with the issue, while Women's Aid has removed itself from X, calling the crisis a 'tipping point'. Human rights lawyer Caoilfhionn Gallagher said such sexualised abuse of children online has 'devastating' impacts.
It's not only law firms and legal departments that are adopting GenAI systems without fully understanding what they can and cannot do; court systems may also be tempted to adopt these tools to short-circuit workloads in the face of limited resources. And that poses risks to the rule of law, a notion that hinges on accuracy, fairness, and public perception.
Across organizations of every size, I am seeing the same operational pattern take shape. Legal teams are carrying more work, adopting more technology, and fielding increasing demands from the business, yet the underlying infrastructure has not evolved at the same pace. The result is a readiness gap that grows quietly and gradually, often in the background of an otherwise high-functioning department. The encouraging part is that the leaders who recognize the pattern early are already finding practical ways to close it.
The Well-Architected Framework, long used by architects to benchmark cloud workloads against pillars such as operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability, now incorporates AI-specific guidance across these pillars. The expanded lenses reflect AWS's recognition of the increasing complexity and societal impact of AI workloads, particularly those powered by generative models.
The flap of a butterfly's wings in South America can famously lead to a tornado in the Caribbean. The so-called butterfly effect, or "sensitive dependence on initial conditions" as it is more technically known, is of profound relevance for organizations seeking to deploy AI solutions. As systems become increasingly interconnected by AI capabilities that sit across and reach into a growing number of critical functions, the risk of cascade failures (localized glitches that ripple outward into organization-wide disruptions) grows substantially.
Enterprise IT execs know well the dangers of relying too heavily on third parties, the need to keep a human in the loop for automated decision systems, and the risks of telling customers too much, or too little, when policy violations require an account shutdown. But a saga that played out Tuesday between Anthropic and the CEO of a Swiss cybersecurity company brings it all into a new and disturbing context.
We are now at the point where automation, machine learning and agentic orchestration can genuinely work together. This is not theory. It is already happening in defense and civilian agencies that have moved past pilots and into production, using agents that bring context, consistency and speed to complex workflows while preserving accountability. These seven principles for an agentic government give leaders a practical framework for adopting automation and AI responsibly.